

Bayesian Model Selection Approach to Boundary Detection with Non-Local Priors

Neural Information Processing Systems

Based on non-local prior distributions, we propose a Bayesian model selection (BMS) procedure for boundary detection in a sequence of data with multiple systematic mean changes. The BMS method can effectively suppress the non-boundary spike points with large instantaneous changes.





The Cost of Learning under Multiple Change Points

Gafni, Tomer, Iyengar, Garud, Zeevi, Assaf

arXiv.org Machine Learning

We consider an online learning problem in environments with multiple change points. In contrast to the single change point problem that is widely studied using classical "high confidence" detection schemes, the multiple change point environment presents new learning-theoretic and algorithmic challenges. Specifically, we show that classical methods may exhibit catastrophic failure (high regret) due to a phenomenon we refer to as endogenous confounding. To overcome this, we propose a new class of learning algorithms dubbed Anytime Tracking CUSUM (ATC). These are horizon-free online algorithms that implement a selective detection principle, balancing the need to ignore "small" (hard-to-detect) shifts, while reacting "quickly" to significant ones. We prove that the performance of a properly tuned ATC algorithm is nearly minimax-optimal; its regret is guaranteed to closely match a novel information-theoretic lower bound on the achievable performance of any learning algorithm in the multiple change point problem. Experiments on synthetic as well as real-world data validate the aforementioned theoretical findings.
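The ATC algorithms build on CUSUM-style detection. As background, here is a minimal sketch of a classical one-sided CUSUM detector for an upward mean shift; this is the textbook scheme, not the paper's ATC algorithm, and the reference mean `mu0`, drift `k`, and threshold `h` are illustrative assumptions:

```python
def cusum_detect(xs, mu0=0.0, k=0.5, h=5.0):
    """Return the index of the first alarm, or None if no shift is flagged.

    mu0: pre-change mean; k: drift (allowance) that absorbs small shifts;
    h: alarm threshold. Larger k ignores small shifts, echoing the
    selective-detection idea of reacting only to significant changes.
    """
    s = 0.0
    for t, x in enumerate(xs):
        # Accumulate evidence of an upward shift, clipped at zero.
        s = max(0.0, s + (x - mu0 - k))
        if s > h:
            return t
    return None
```

For example, on a stream of zeros followed by a jump to 5, the statistic accumulates roughly `5 - k` per post-change sample and crosses `h` within a couple of observations after the shift.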


f6ccfa588d2a95bef5a3b101c02524c9-Supplemental-Conference.pdf

Neural Information Processing Systems

It is known that Binary Segmentation is consistent but not optimal (Venkatraman, 1992). As an improvement, Fryzlewicz (2014) proposes Wild Binary Segmentation and shows that it has a better localization rate.
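For readers unfamiliar with Binary Segmentation, a minimal sketch of the standard procedure on a one-dimensional sequence follows. The CUSUM contrast statistic and the threshold `thresh` are the textbook formulation, not values taken from any of the papers above:

```python
import math

def cusum_stat(xs, lo, hi):
    """Max absolute CUSUM contrast and its argmax over splits of xs[lo:hi]."""
    n = hi - lo
    total = sum(xs[lo:hi])
    best, best_t = -1.0, None
    left = 0.0
    for t in range(lo, hi - 1):
        left += xs[t]
        m = t - lo + 1  # points on the left of the candidate split
        # Standardized difference of left and right partial sums.
        stat = abs(math.sqrt((n - m) / (n * m)) * left
                   - math.sqrt(m / (n * (n - m))) * (total - left))
        if stat > best:
            best, best_t = stat, t
    return best, best_t

def binary_segmentation(xs, thresh, lo=0, hi=None):
    """Recursively split at the strongest CUSUM peak above thresh."""
    if hi is None:
        hi = len(xs)
    if hi - lo < 2:
        return []
    stat, t = cusum_stat(xs, lo, hi)
    if stat < thresh:
        return []
    return (binary_segmentation(xs, thresh, lo, t + 1)
            + [t]
            + binary_segmentation(xs, thresh, t + 1, hi))
```

Each recursion finds the single best split of the current segment and stops once no split exceeds the threshold, which is why the method is greedy and consistent but not rate-optimal.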





Adaptive Online Estimation of Piecewise Polynomial Trends

Neural Information Processing Systems

We consider the framework of non-stationary stochastic optimization [Besbes et al., 2015] with squared error losses and noisy gradient feedback, where the dynamic regret of an online learner against a time-varying comparator sequence is studied.